

Additional continual learning experiment


We carried out an additional continual learning experiment on eight tasks (as in [33, manuscript]) consisting of vision datasets from different domains: {CIFAR-10 / CIFAR-100 / MNIST / SVHN /
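For intuition, the sequential-task protocol used in such experiments can be sketched with a deliberately simple learner — a nearest-class-mean classifier trained task by task and evaluated on all tasks seen so far. This is a toy illustration only, not the paper's method; the data and names below are hypothetical:

```python
class NearestClassMean:
    """Toy continual learner: keeps running per-class feature sums."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, data):
        # Incrementally absorb one task's labelled examples (x, y).
        for x, y in data:
            s = self.sums.get(y, [0.0] * len(x))
            self.sums[y] = [a + b for a, b in zip(s, x)]
            self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, x):
        # Return the label whose class mean is closest in squared distance.
        best, best_d = None, float("inf")
        for y, s in self.sums.items():
            m = [v / self.counts[y] for v in s]
            d = sum((a - b) ** 2 for a, b in zip(x, m))
            if d < best_d:
                best, best_d = y, d
        return best


def continual_run(tasks):
    """Train on tasks sequentially; after each task, report accuracy
    averaged over the test sets of all tasks seen so far."""
    model, history = NearestClassMean(), []
    for t, (train, _) in enumerate(tasks):
        model.update(train)
        seen = [ex for _, test in tasks[: t + 1] for ex in test]
        correct = sum(1 for x, y in seen if model.predict(x) == y)
        history.append(correct / len(seen))
    return history
```

After each task the learner is scored on every test set observed so far, which is the standard way to expose forgetting in a continual setting.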

Neural Information Processing Systems

Figure 1: Results on 8 visually different tasks (left), comparison with Re-training (middle), and Atari RL (right).

The Re-training baseline uses AlexNet and re-trains for each task on the entire training sets observed so far.

We respectfully disagree that our paper has only incremental contributions. Eq. (5) is a general expression that defines proximal gradient descent, and our method can also be applied to the online continual learning setting.
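Eq. (5) itself is not reproduced here, but the generic proximal gradient step it instantiates, x_{k+1} = prox_{η g}(x_k − η ∇f(x_k)), can be sketched on a 1-D lasso objective. The function names and constants below are illustrative, not from the paper:

```python
def soft_threshold(v, t):
    # Proximal operator of g(x) = t * |x| (soft-thresholding).
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0


def proximal_gradient(a, lam, step=0.5, iters=200):
    """Minimize f(x) + g(x) with f(x) = 0.5 * (x - a)**2 (smooth, f'(x) = x - a)
    and g(x) = lam * |x| (non-smooth), via x <- prox_{step*g}(x - step * f'(x))."""
    x = 0.0
    for _ in range(iters):
        x = soft_threshold(x - step * (x - a), step * lam)
    return x
```

For this objective the minimizer has the closed form soft_threshold(a, lam), so for example proximal_gradient(3.0, 1.0) converges to 2.0; the point is that the same two-step template covers any smooth-plus-proximable composite objective.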